Dropout regularization

A form of regularization useful in training neural networks. Dropout regularization removes a random selection of the units in a network layer for a single gradient step: each unit is dropped independently with some probability (the dropout rate), so the number of dropped units varies from step to step. The higher the dropout rate, the stronger the regularization. Training with dropout is analogous to training the network to emulate an exponentially large ensemble of smaller networks.1
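
A minimal sketch of the idea in NumPy (illustrative only, not any framework's API; `rate` is the dropout probability, and the 1/(1 - rate) rescaling is the common "inverted dropout" convention so that no scaling is needed at inference time):

```python
import numpy as np

def dropout(x, rate, training=True, rng=None):
    """Inverted dropout: zero each activation independently with
    probability `rate`; scale survivors by 1/(1 - rate) so the
    expected activation matches the no-dropout value at test time."""
    if not training or rate == 0.0:
        return x  # dropout is disabled at inference time
    rng = rng or np.random.default_rng()
    mask = rng.random(x.shape) >= rate  # keep each unit with prob 1 - rate
    return x * mask / (1.0 - rate)

# Example: with rate=0.5, roughly half the units are zeroed each step,
# and the kept units are doubled to preserve the expected sum.
h = np.ones((2, 4))
print(dropout(h, rate=0.5))
```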

For full details, see Dropout: A Simple Way to Prevent Neural Networks from Overfitting (Srivastava et al., JMLR 2014).

Footnotes

  1. developers.google.com/machine-learning/glossary#dropout_regularization